Introduction
We recently published the article Multi-headed VMWare Gaming Setup where we used VMWare ESXI to run four virtual gaming machines from a single PC. Each virtual machine had its own dedicated GPU and USB controller, so there was absolutely no input or display lag while gaming. The setup worked great and the article was very popular, but one limitation we found was that NVIDIA GeForce cards cannot be used as passthrough devices in VMWare ESXI. We received feedback from some readers that GeForce cards should work in Linux with KVM (Kernel-based Virtual Machine), so we set out to make a GeForce-based multiheaded gaming PC using Ubuntu 14.04 and KVM.
What we found is that while it is completely possible, getting GPU passthrough to work in a Linux distribution like Ubuntu was not as simple as following a single guide. Once we figured out the process it wasn't too bad, but most of the guides we found are written for Arch Linux rather than Ubuntu. And while both are Linux distributions, there are enough differences that key portions of those guides are not directly applicable to Ubuntu. Since we had already spent the effort of figuring out how to get our multiheaded gaming system working in Ubuntu, we decided to write our own guide based on what we were able to piece together from various sources.
Most of what we figured out was based on the guide KVM VGA-Passthrough using the new vfio-vga support in kernel =>3.9 written by user nbhs. However, this guide is intended for Arch Linux, so there were some things we had to change in order for everything to work in Ubuntu. In addition to the guide above, we also heavily used the following sources:
One thing we won't be covering in this guide is the basic installation of Ubuntu and KVM since there are already a number of guides available. Honestly, if you are unable to install Ubuntu and KVM on your own, this project is likely more advanced than you are ready for. However, one guide we will specifically mention is this KVM/Installation guide that we followed to install KVM.
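For reference, the KVM install itself usually boils down to a couple of apt-get commands plus a hardware virtualization check. The package names below are the ones commonly used for Ubuntu 14.04, so defer to the linked guide if it lists something different:

sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils
sudo apt-get install cpu-checker
kvm-ok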
Hardware requirements
There are actually very few hardware requirements for doing GPU passthrough with KVM beyond the hardware being supported by Ubuntu and the CPU and motherboard supporting virtualization.
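A quick way to check the CPU side of that requirement (just a convenience check, not part of the setup itself) is to count the virtualization flags the kernel reports; a result greater than zero means VT-x or AMD-V is available. Keep in mind that IOMMU support (Intel VT-d or AMD-Vi) also needs to be enabled in the motherboard BIOS/UEFI for passthrough to work:

egrep -c '(vmx|svm)' /proc/cpuinfo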
One thing we will mention is that our test system is an Intel-based system and that we will be using NVIDIA GeForce GTX cards for passthrough. You can use an AMD CPU and/or GPU, but you may have to tweak some of the instructions in this guide. For the sake of completeness, here is our test hardware:
Testing Hardware | |
Motherboard: | Asus P9X79 WS |
CPU: | Intel Xeon E5-2695 v2 12 Core @ 2.4 GHz |
RAM: | 4x Kingston DDR3-1600 8GB ECC Reg. (32GB total) |
GPU: | 1x ASUS Radeon R9 280 3GB DirectCU II (main Ubuntu display); 3x NVIDIA GeForce GTX Titan/Titan Black 6GB (passthrough displays) |
Hard Drive: | Samsung 840 EVO 1TB SATA 6Gb/s SSD |
PSU: | Silverstone ST1500 1500W |
OS: | Ubuntu 14.04 LTS (main OS); Windows 8.1 Pro 64-bit (VM OS) |
One last thing that we will note is that with Linux there are often many ways to do the same thing. In fact, the methods we will be showing in this guide are very possibly not the most efficient way to do this. So if you have an idea or come across a different way to do something, just give it a shot. If you like it better, be sure to let us know in the comments at the end of this article!
Step 1: Edit the Ubuntu modules and bootloader
As we found on this forum post, since we are using the stock Ubuntu kernel, one thing we will need to do is add a few missing components necessary to load VFIO (Virtual Function I/O). VFIO is required to pass full devices through to a virtual machine, so we need to make sure Ubuntu loads everything it needs. To do this, edit the /etc/modules file with the command sudo gedit /etc/modules and add:
pci_stub
vfio
vfio_iommu_type1
vfio_pci
kvm
kvm_intel
Next, in order for Ubuntu to load IOMMU properly, we need to edit the Grub cmdline. To do so, enter the command sudo gedit /etc/default/grub to open the grub bootloader file. On the line with "GRUB_CMDLINE_LINUX_DEFAULT", add "intel_iommu=on" to enable IOMMU. On our motherboard, we also needed to add "vfio_iommu_type1.allow_unsafe_interrupts=1" in order to enable interrupt remapping. Depending on your motherboard, this may or may not be necessary. For our system, the boot line looks like:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1"
After that, run sudo update-grub to update Grub with the new settings and reboot the system.
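After the reboot, you can optionally confirm that the IOMMU came up and the extra modules loaded. These checks are not part of the original guide, just a convenient sanity test:

dmesg | grep -e DMAR -e IOMMU
lsmod | grep vfio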
Step 2: Blacklist the NVIDIA cards
NVIDIA cards cannot be passed to a virtual machine if the base Ubuntu OS is already using them, so in order to keep Ubuntu from claiming the NVIDIA cards we have to blacklist them by adding their IDs to the initramfs so that pci-stub grabs them at boot. Note that you do not want to do this for your primary GPU unless you are prepared to continue the rest of this guide through SSH or some other method of remote console. Credit for this step goes to superuser.com user genpfault from this question.
Use the command lspci -nn | grep NVIDIA to list the NVIDIA devices. If you are using video cards other than NVIDIA, you can simply use lspci -nn and search through the output to find your video cards.
02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK110B [GeForce GTX Titan Black] [10de:100c] (rev a1)
02:00.1 Audio device [0403]: NVIDIA Corporation GK110 HDMI Audio [10de:0e1a] (rev a1)
03:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK110B [GeForce GTX Titan Black] [10de:100c] (rev a1)
03:00.1 Audio device [0403]: NVIDIA Corporation GK110 HDMI Audio [10de:0e1a] (rev a1)
04:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK110 [GeForce GTX Titan] [10de:1005] (rev a1)
04:00.1 Audio device [0403]: NVIDIA Corporation GK110 HDMI Audio [10de:0e1a] (rev a1)
What we need is actually the ID at the end of each line that we will tell initramfs to blacklist. For our system, the three unique IDs are: 10de:100c, 10de:0e1a, and 10de:1005. Notice that they are not unique IDs per device, but rather per model. Since we have two different models of video cards that we want to pass through to virtual machines (two Titan Blacks and one Titan), we have two different IDs for the GPU. Since both models use the same HDMI audio device, we only have one HDMI ID for all three cards.
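If you have several cards and want to avoid picking the IDs out by hand, one optional convenience (not required for the rest of the guide) is a one-liner that prints just the unique vendor:device pairs; adjust the 10de vendor ID if you are not using NVIDIA cards:

lspci -nn | grep NVIDIA | grep -o '\[10de:[0-9a-f]*\]' | tr -d '[]' | sort -u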
With these IDs in hand, open initramfs-tools/modules with the command sudo gedit /etc/initramfs-tools/modules and add this line (substituting your system's IDs):
pci_stub ids=10de:100c,10de:0e1a,10de:1005
After saving the file, rebuild the initramfs with the command sudo update-initramfs -u and reboot the system.
After the reboot, check that the cards are being claimed by pci-stub correctly with the command dmesg | grep pci-stub. In our case, all six devices should be listed as "claimed by stub". If your devices are not showing up as claimed, first try copy/pasting the IDs directly from the terminal into the modules file, since we found that typing them out sometimes didn't work for reasons we never pinned down.
[ 1.522487] pci-stub: add 10DE:100C sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
[ 1.522498] pci-stub 0000:02:00.0: claimed by stub
[ 1.522509] pci-stub 0000:03:00.0: claimed by stub
[ 1.522516] pci-stub: add 10DE:1005 sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
[ 1.522521] pci-stub 0000:04:00.0: claimed by stub
[ 1.522527] pci-stub: add 10DE:0E1A sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
[ 1.522536] pci-stub 0000:02:00.1: claimed by stub
[ 1.522544] pci-stub 0000:03:00.1: claimed by stub
[ 1.522554] pci-stub 0000:04:00.1: claimed by stub
Note that all six devices are listed as "claimed by stub"
Step 3: Create VFIO config files
In order to bind the video cards to the virtual machines, we need to create a config file for each virtual machine. To do this, create a cfg file with the command sudo gedit /etc/vfio-pci#.cfg where # is a unique number for each of your planned virtual machines. Within each file, enter the PCI addresses (one per line) of the video card you want passed through to that virtual machine, both the VGA controller and its HDMI audio function. These addresses can be found with the command lspci -nn | grep NVIDIA and are shown at the beginning of each line. Again, if you are not using NVIDIA you can use the messier command lspci -nn and hunt down your video cards. For our setup, we ended up with these three .cfg files:
/etc/vfio-pci1.cfg
0000:02:00.0
0000:02:00.1
/etc/vfio-pci2.cfg
0000:03:00.0
0000:03:00.1
/etc/vfio-pci3.cfg
0000:04:00.0
0000:04:00.1
Step 4: Create virtual disk(s)
Most of the prep work is done at this point, but before we configure our first virtual machine we first need to create a virtual disk for the virtual machine to use. To do this, repeat the following command for as many virtual machines as you want:
dd if=/dev/zero of=windows#.img bs=1M seek=size count=0
where windows#.img is a unique name for each virtual machine image and size is the image size in MB (roughly the size you want in GB times 1000). If you want roughly an 80GB image, enter 80000. We wanted a 120GB image, so we entered 120000. The img file will be created in whatever directory you run the command from, typically your home folder.
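Filled in for the first 120GB image, the command looks like this (run it from your home folder, or give a full path for of=):

dd if=/dev/zero of=windows1.img bs=1M seek=120000 count=0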
Step 5: Create a script to run each virtual machine
We need to be able to create a very custom virtual machine which simply is not possible with any GUI-based virtual machine manager in Ubuntu that we know of. Using a script also allows us to bind the video card to VFIO right before running the virtual machine instead of getting into startup scripts like the Arch Linux guide uses. Credit goes to heiko_s on the Ubuntu forums for the nice script below.
What this script does is first bind the video card to VFIO based on the .cfg file we created a few steps back. After that, it launches a virtual machine that uses both the video card we specified and the image we just made in the previous step.
To make the script, enter the command sudo gedit /usr/vm# where # is the unique identifier for that virtual machine. Next, copy the script below into this file, changing the vfio-pci config file number, the PCI addresses, and the disk/ISO paths to match your configuration:
#!/bin/bash

configfile=/etc/vfio-pci#.cfg

# Unbind a device from its current driver (if any) and register it with vfio-pci
vfiobind() {
    dev="$1"
    vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
    device=$(cat /sys/bus/pci/devices/$dev/device)
    if [ -e /sys/bus/pci/devices/$dev/driver ]; then
        echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
    fi
    echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
}

modprobe vfio-pci

# Bind every PCI address listed in the config file (lines starting with # are skipped)
cat $configfile | while read line; do
    echo $line | grep ^# > /dev/null 2>&1 && continue
    vfiobind $line
done

# Launch the virtual machine with the GPU and its HDMI audio function passed through
sudo qemu-system-x86_64 -enable-kvm -M q35 -m 4096 -cpu host \
 -smp 4,sockets=1,cores=4,threads=1 \
 -bios /usr/share/qemu/bios.bin -vga none \
 -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
 -device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
 -device vfio-pci,host=02:00.1,bus=root.1,addr=00.1 \
 -drive file=/home/puget/windows#.img,id=disk,format=raw -device ide-hd,bus=ide.0,drive=disk \
 -drive file=/home/puget/Downloads/Windows.iso,id=isocd -device ide-cd,bus=ide.1,drive=isocd \
 -boot menu=on

exit 0
Be sure to change the # to the unique identifier for this virtual machine and to confirm that the "/etc/vfio-pci#.cfg" file corresponds to the PCI addresses in the "-device vfio-pci" lines. You may also want to edit the amount of RAM the virtual machine will get ("-m 4096" will give 4096MB, or 4GB, of RAM) and the number of CPU cores and sockets ("-smp 4,sockets=1,cores=4,threads=1" will give a single-socket, 4-core vCPU without hyperthreading).
One additional thing you can do is directly mount an ISO of whatever OS you want to install. The ISO we used was named Windows.iso and is located in our Downloads folder. Simply change this location to point to whatever ISO you want to install from.
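As an illustration (the values and ISO path here are just examples, not what we actually used), a VM with 8GB of RAM, a 4-core/8-thread vCPU, and an ISO stored elsewhere would change the relevant options to something like:

 -m 8192 \
 -smp 8,sockets=1,cores=4,threads=2 \
 -drive file=/home/user/isos/Windows.iso,id=isocd -device ide-cd,bus=ide.1,drive=isocd \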
Once the script is configured how you want it, save it then enter the command sudo chmod 755 /usr/vm# to make the script executable.
Step 6: Start the virtual machine
At this point, everything should be configured to allow the video card to be properly passed through to the virtual machine. Give the system one more reboot just to be sure everything took correctly and plug a monitor into the video card you have set to be passed through. Start the virtual machine with the command sudo /usr/vm# where # is the unique identifier for that virtual machine. If everything was done properly, a black window titled "QEMU" should show up in Ubuntu and you should get a display on your virtual machine's monitor. However, don't be surprised or disappointed if you get an error.
If you get an error, go back through this guide to make sure you didn't miss anything. If you are sure you didn't miss anything, then it is probably a problem unique to your hardware. Unfortunately, all we can really say is "good luck" and have some fun googling the error you are getting. Most likely there is something slightly different about your hardware that requires a slightly different setup and configuration. This is simply one of the joys of Linux. Luckily, Ubuntu and Linux in general have a very active community, so you are very likely to find the solution to your error if you do enough digging.
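Two places worth checking before resorting to a search engine (general diagnostics, not a required step) are the kernel messages for VFIO and for the IOMMU:

dmesg | grep -i vfio
dmesg | grep -e DMAR -e IOMMU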
Step 7: Add USB support
Getting an NVIDIA GeForce card to pass through to a virtual machine is great, but we still need a way to actually install and use an OS on the virtual machine. To do this, we need to add USB support to the virtual machine. In our opinion, the best way to do this is to simply pass through an entire USB controller, much like what we just did with the video cards. However, we have found that some USB controllers simply don't like to be used as a passthrough device. If that happens, you will need to pass through individual USB devices instead.
USB Controller Pass-through
To pass through an entire USB controller, first use lspci -nn | grep USB to find the PCI address of the USB controller you want to pass through. Then, add the address to your /etc/vfio-pci#.cfg file as a new line just like what we did earlier for the video card.
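For example, if we added the USB 2.0 controller at 00:1d.0 (the same example address used in the line below; yours may well differ) to the first virtual machine, /etc/vfio-pci1.cfg would become:

0000:02:00.0
0000:02:00.1
0000:00:1d.0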
Next, open your virtual machine script with the command sudo gedit /usr/vm# and add the following line:
-device vfio-pci,host=00:1d.0,bus=pcie.0 \
Just replace 00:1d.0 with the address of your USB controller. If you are lucky, it will work without a hitch. If you are not lucky, there are a number of reasons why you may not be able to pass through that specific controller.
If you get an error, you might simply try a different controller. On our system, we were able to pass through the USB 2.0 controllers without a problem, but could not get the USB 3.0 controllers to work due to a problem with there being additional devices in their IOMMU group. We were unable to solve that issue, so for our system we ended up passing through individual USB devices instead of the entire controller.
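If you want to check the IOMMU grouping for yourself before trying a controller (a generic sysfs lookup, not something specific to this guide), you can list the devices that share its group; substitute your controller's PCI address for the 0000:00:1d.0 example here:

ls /sys/bus/pci/devices/0000:00:1d.0/iommu_group/devices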
USB Device Pass-through
If you run into problems passing through an entire USB controller that you cannot solve, the other option is to pass through individual USB devices. This is actually easier in many ways, but USB device addresses like to change randomly so you may find that you need to edit the virtual machine script any time you reboot the machine or add/change a USB device.
To start, use the command lsusb to show the USB devices currently connected to your system. In our case, we are creating three virtual machines so we have three additional sets of keyboards and mice plugged in. The relevant part of our lsusb output looks like:
Bus 002 Device 017: ID 045e:00cb Microsoft Corp. Basic Optical Mouse v2.0
Bus 002 Device 016: ID 045e:00cb Microsoft Corp. Basic Optical Mouse v2.0
Bus 002 Device 015: ID 045e:00cb Microsoft Corp. Basic Optical Mouse v2.0
Bus 002 Device 013: ID 045e:07f8 Microsoft Corp.
Bus 002 Device 014: ID 045e:07f8 Microsoft Corp.
Bus 002 Device 011: ID 045e:07f8 Microsoft Corp.
Most guides for KVM will say to use the ID to pass through USB devices (like 045e:00cb), but note that the ID is unique per model, not per device. The ID is the more reliable method, so use it if you can, but since we have multiple devices of the same model we have to use the bus and device numbers instead. To do this, add one of the following lines to your /usr/vm# script for the USB devices you want to pass through to the virtual machine.
By ID:
-usb -usbdevice host:045e:00cb -usbdevice host:045e:07f8 \
By bus and device:
-usb -device usb-host,hostbus=2,hostaddr=17 -device usb-host,hostbus=2,hostaddr=13 \
Be sure to change the IDs or bus/device numbers to match your hardware. If you find that your USB device no longer works, either randomly or after a reboot, rerun lsusb to find out if the device number has changed.
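One small time-saver (optional) is that lsusb can filter by ID, which makes it easy to see the current bus and device numbers for every unit of a given model:

lsusb -d 045e:00cb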
Congratulations! You are done!
There are plenty of other options in KVM that you can experiment with, but at this point you should have a virtual machine (or multiple virtual machines) up and running – each with their own dedicated video card and keyboard/mouse. Simply install your OS of choice and enjoy your multiheaded gaming system!
If you are interested in how well this works or want to find out more about how this could be used in the real world, be sure to check out our Multi-headed VMWare Gaming Setup article.
One thing we will say is that after using both VMWare ESXI and Ubuntu+KVM to make a multiheaded gaming PC, VMWare was by far the easier and more reliable method. Things like being able to pass through all of our USB controllers without any problems and having the vSphere client to easily administer the virtual machines over the network made VMWare much easier to use. It is limited to AMD Radeon and NVIDIA Quadro cards, but even with that limitation it is still the method we would recommend if you are planning on building a multiheaded gaming PC.